Randomized Smoothing for Stochastic Optimization

Authors

  • John C. Duchi
  • Peter L. Bartlett
  • Martin J. Wainwright
Abstract

We analyze convergence rates of stochastic optimization algorithms for nonsmooth convex optimization problems. By combining randomized smoothing techniques with accelerated gradient methods, we obtain convergence rates of stochastic optimization procedures, both in expectation and with high probability, that have optimal dependence on the variance of the gradient estimates. To the best of our knowledge, these are the first variance-based rates for nonsmooth optimization. We give several applications of our results to statistical estimation problems and provide experimental results that demonstrate the effectiveness of the proposed algorithms. We also describe how a combination of our algorithm with recent work on decentralized optimization yields a distributed stochastic optimization algorithm that is order-optimal.
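
The core estimator behind randomized smoothing is simple to sketch: replace the nonsmooth f with the Gaussian convolution f_r(x) = E[f(x + r Z)] and estimate its gradient from function values at randomly perturbed points. The Python snippet below is a minimal, non-accelerated illustration of that idea; the test function, smoothing radius, sample count, and step size are our illustrative choices, not the paper's accelerated scheme.

```python
import numpy as np

def smoothed_grad(f, x, radius, n_samples, rng):
    """Two-point Monte Carlo estimate of the gradient of the Gaussian-smoothed
    function f_r(x) = E[f(x + radius * Z)], with Z ~ N(0, I)."""
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(n_samples):
        z = rng.standard_normal(d)
        # symmetric differences reduce the variance of the estimate
        g += (f(x + radius * z) - f(x - radius * z)) / (2.0 * radius) * z
    return g / n_samples

# Illustrative use: plain (non-accelerated) gradient steps on the smoothed
# surrogate of the nonsmooth function f(x) = ||x||_1.
rng = np.random.default_rng(0)
f = lambda x: np.abs(x).sum()
x = rng.standard_normal(10)
for _ in range(200):
    x -= 0.05 * smoothed_grad(f, x, radius=0.1, n_samples=4, rng=rng)
print(f(x))
```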

Similar references

A Randomized Mirror-Prox Method for Solving Structured Large-Scale Matrix Saddle-Point Problems

In this paper, we derive a randomized version of the Mirror-Prox method for solving some structured matrix saddle-point problems, such as the maximal eigenvalue minimization problem. Deterministic first-order schemes, such as Nesterov’s Smoothing Techniques or standard Mirror-Prox methods, require the exact computation of a matrix exponential at every iteration, limiting the size of the problem...

A Stochastic Smoothing Algorithm for Semidefinite Programming

We use rank one Gaussian perturbations to derive a smooth stochastic approximation of the maximum eigenvalue function. We then combine this smoothing result with an optimal smooth stochastic optimization algorithm to produce an efficient method for solving maximum eigenvalue minimization problems, and detail a variant of this stochastic algorithm with monotonic line search. Overall, compared to...
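
As a rough illustration of the smoothing step described above, the following Python snippet estimates a rank-one Gaussian smoothing of the maximum eigenvalue by Monte Carlo. The exact perturbation and weighting used in the cited paper may differ; the matrix, eps, and sample count here are illustrative.

```python
import numpy as np

def smoothed_lambda_max(A, eps, n_samples, rng):
    """Monte Carlo estimate of a rank-one Gaussian smoothing of the maximum
    eigenvalue: E_z[ lambda_max(A + eps * z z^T) ], with z ~ N(0, I)."""
    d = A.shape[0]
    vals = [np.linalg.eigvalsh(A + eps * np.outer(z, z))[-1]
            for z in rng.standard_normal((n_samples, d))]
    return float(np.mean(vals))

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2.0                          # symmetric test matrix
print(np.linalg.eigvalsh(A)[-1])             # exact lambda_max
print(smoothed_lambda_max(A, eps=0.01, n_samples=200, rng=rng))
```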

Randomized Derivative-Free Optimization of Noisy Convex Functions

We propose STARS, a randomized derivative-free algorithm for unconstrained optimization when the function evaluations are contaminated with random noise. STARS takes dynamic, noise-adjusted smoothing stepsizes that minimize the least-squares error between the true directional derivative of a noisy function and its finite difference approximation. We provide a convergence rate analysis of STARS ...
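
The basic object STARS works with is a Gaussian-direction finite-difference gradient estimate of a noisy function. The Python sketch below shows that estimator with the smoothing stepsize mu left as a hand-picked parameter; the paper's contribution is a noise-adjusted rule for choosing mu, which is not reproduced here, and the test problem and constants are our own.

```python
import numpy as np

def fd_gradient_estimate(f_noisy, x, mu, rng):
    """Gaussian-direction finite-difference gradient estimate
    g = (f(x + mu*u) - f(x)) / mu * u, with u ~ N(0, I)."""
    u = rng.standard_normal(x.shape[0])
    return (f_noisy(x + mu * u) - f_noisy(x)) / mu * u

# Illustrative use on a noisy quadratic.  In STARS the smoothing stepsize mu
# is derived from the noise level; here it is simply fixed by hand.
rng = np.random.default_rng(1)
f_noisy = lambda x: 0.5 * np.dot(x, x) + 1e-3 * rng.standard_normal()
x = np.ones(20)
for _ in range(500):
    x -= 0.01 * fd_gradient_estimate(f_noisy, x, mu=0.05, rng=rng)
print(np.linalg.norm(x))
```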

A simple randomised algorithm for convex optimisation - Application to two-stage stochastic programming

We consider maximising a concave function over a convex set by a simple randomised algorithm. The strength of the algorithm is that it requires only approximate function evaluations for the concave function and a weak membership oracle for the convex set. Under smoothness conditions on the function and the feasible set, we show that our algorithm computes a near-optimal point in a number of ope...

A Smoothing Stochastic Gradient Method for Composite Optimization

We consider the unconstrained optimization problem whose objective function is composed of a smooth and a non-smooth component, where the smooth component is the expectation of a random function. This type of problem arises in some interesting applications in machine learning. We propose a stochastic gradient descent algorithm for this class of optimization problems. When the non-smooth component h...
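
For concreteness, here is a minimal Python sketch of one way such a composite objective can be handled with smoothing stochastic gradient steps, using an l1 regularizer replaced by a Huber-type smooth surrogate. The surrogate, constants, and data model are our illustrative choices rather than the algorithm analyzed in the paper.

```python
import numpy as np

def huber_grad(x, delta):
    """Gradient of a Huber-type smoothing of the nonsmooth term ||x||_1."""
    return np.where(np.abs(x) <= delta, x / delta, np.sign(x))

# Smoothing stochastic gradient steps for the composite objective
#   E_xi[ 0.5 * (a_xi @ x - b_xi)^2 ] + lam * ||x||_1,
# where the l1 part is handled through the smooth Huber surrogate above.
rng = np.random.default_rng(2)
d, lam, delta = 30, 0.1, 0.01
x_true = np.zeros(d); x_true[:3] = 1.0
x = np.zeros(d)
for _ in range(2000):
    a = rng.standard_normal(d)                    # one random sample xi
    b = a @ x_true + 0.1 * rng.standard_normal()
    grad_smooth = (a @ x - b) * a                 # stochastic gradient of the smooth part
    x -= 0.01 * (grad_smooth + lam * huber_grad(x, delta))
print(np.round(x, 2))
```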

Journal:
  • SIAM Journal on Optimization

Volume 22, Issue -

Pages -

Publication date: 2012